Results 1 - 9 of 9
1.
Article in English | MEDLINE | ID: mdl-38724204

ABSTRACT

BACKGROUND AND PURPOSE: Tumor segmentation is essential in surgical and treatment planning and in response assessment and monitoring in pediatric brain tumors, the leading cause of cancer-related death among children. However, manual segmentation is time-consuming and has high interoperator variability, underscoring the need for more efficient methods. We compared 2 deep-learning-based 3D segmentation models, DeepMedic and nnU-Net, after training with pediatric-specific multi-institutional brain tumor data based on multiparametric MR images.

MATERIALS AND METHODS: Multiparametric preoperative MR imaging scans of 339 pediatric patients (n = 293 internal and n = 46 external cohorts) with a variety of tumor subtypes were preprocessed and manually segmented into 4 tumor subregions, ie, enhancing tumor, nonenhancing tumor, cystic components, and peritumoral edema. After training, the performance of the 2 models on internal and external test sets was evaluated with reference to ground truth manual segmentations. Additionally, concordance was assessed by comparing the volume of the subregions as a percentage of the whole tumor between model predictions and ground truth segmentations, using the Pearson or Spearman correlation coefficients and the Bland-Altman method.

RESULTS: For nnU-Net on the internal test set, the mean Dice scores were 0.9 (SD, 0.07) (median, 0.94) for whole tumor; 0.77 (SD, 0.29) for enhancing tumor; 0.66 (SD, 0.32) for nonenhancing tumor; 0.71 (SD, 0.33) for cystic components; and 0.71 (SD, 0.40) for peritumoral edema. For DeepMedic, the mean Dice scores were 0.82 (SD, 0.16) for whole tumor; 0.66 (SD, 0.32) for enhancing tumor; 0.48 (SD, 0.27) for nonenhancing tumor; 0.48 (SD, 0.36) for cystic components; and 0.19 (SD, 0.33) for peritumoral edema. Dice scores were significantly higher for nnU-Net (P ≤ .01). Correlation coefficients for tumor subregion percentage volumes were also higher for nnU-Net (0.98 versus 0.91 for enhancing tumor, 0.97 versus 0.75 for nonenhancing tumor, 0.98 versus 0.80 for cystic components, and 0.95 versus 0.33 for peritumoral edema in the internal test set). Bland-Altman plots likewise favored nnU-Net over DeepMedic. External validation of the trained nnU-Net model on the multi-institutional Brain Tumor Segmentation Challenge in Pediatrics (BraTS-PEDs) 2023 data set revealed high generalization capability in the segmentation of whole tumor, tumor core (a combination of enhancing tumor, nonenhancing tumor, and cystic components), and enhancing tumor, with mean Dice scores of 0.87 (SD, 0.13) (median, 0.91), 0.83 (SD, 0.18) (median, 0.89), and 0.48 (SD, 0.38) (median, 0.58), respectively.

CONCLUSIONS: The nnU-Net model trained on pediatric-specific data is superior to DeepMedic for whole tumor and subregion segmentation of pediatric brain tumors.
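For readers unfamiliar with the volume-concordance analysis described in this abstract, the following is a minimal, illustrative sketch (not the authors' code) of how subregion volume percentages might be compared between predicted and manual segmentations with Pearson/Spearman correlation and a Bland-Altman bias estimate; the per-patient numbers and the 1.96·SD limits-of-agreement convention are assumptions for illustration.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Illustrative per-patient volumes (mm^3): one subregion vs. whole tumor,
# from model predictions and ground-truth manual segmentations.
pred_sub = np.array([12.1, 8.4, 30.2, 5.5, 19.8])
pred_whole = np.array([40.0, 25.3, 88.0, 21.1, 60.4])
gt_sub = np.array([11.5, 9.0, 28.7, 6.1, 21.0])
gt_whole = np.array([41.2, 26.0, 85.5, 22.0, 61.8])

# Express each subregion as a percentage of the whole tumor, as in the abstract.
pred_pct = 100.0 * pred_sub / pred_whole
gt_pct = 100.0 * gt_sub / gt_whole

# Correlation between predicted and manual percentage volumes.
r_pearson, _ = pearsonr(pred_pct, gt_pct)
rho_spearman, _ = spearmanr(pred_pct, gt_pct)

# Bland-Altman summary: mean difference (bias) and 95% limits of agreement.
diff = pred_pct - gt_pct
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"Pearson r = {r_pearson:.2f}, Spearman rho = {rho_spearman:.2f}")
print(f"Bland-Altman bias = {bias:.2f} percentage points, limits of agreement = bias ± {loa:.2f}")
```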

2.
ArXiv ; 2023 Oct 02.
Article in English | MEDLINE | ID: mdl-38106459

ABSTRACT

Pediatric brain and spinal cancers remain the leading cause of cancer-related death in children. Advancements in clinical decision-support in pediatric neuro-oncology that utilize the wealth of radiology imaging data collected through standard care, however, have significantly lagged behind other domains. Such data are ripe for use with predictive analytics such as artificial intelligence (AI) methods, which require large datasets. To address this unmet need, we provide a multi-institutional, large-scale pediatric dataset of 23,101 multi-parametric MRI exams acquired through routine care for 1,526 brain tumor patients, as part of the Children's Brain Tumor Network. This includes longitudinal MRIs across various cancer diagnoses, with associated patient-level clinical information, digital pathology slides, as well as tissue genotype and omics data. To facilitate downstream analysis, treatment-naïve images for 370 subjects were processed and released through the NCI Childhood Cancer Data Initiative via the Cancer Data Service. Through ongoing efforts to continuously build these imaging repositories, we aim to accelerate discovery and translational AI models with real-world data, to ultimately empower precision medicine for children.

3.
Nat Commun ; 14(1): 6863, 2023 11 09.
Article in English | MEDLINE | ID: mdl-37945573

ABSTRACT

Lean muscle mass (LMM) is an important aspect of human health. Temporalis muscle thickness is a promising LMM marker, but its utility has been limited by an unknown normal growth trajectory, the absence of reference ranges, and a lack of standardized measurement. Here, we develop an automated deep learning pipeline to accurately measure temporalis muscle thickness (iTMT) from routine brain magnetic resonance imaging (MRI). We apply iTMT to 23,876 MRIs of healthy subjects, ages 4 through 35, and generate sex-specific iTMT normal growth charts with percentiles. We find that iTMT is associated with specific physiologic traits, including caloric intake, physical activity, sex hormone levels, and presence of malignancy. We validate iTMT across multiple demographic groups and in children with brain tumors and demonstrate feasibility for individualized longitudinal monitoring. The iTMT pipeline provides unprecedented insights into temporalis muscle growth during human development and enables the use of LMM tracking to inform clinical decision-making.


Subject(s)
Growth Charts, Temporal Muscle, Male, Female, Humans, Child, Temporal Muscle/diagnostic imaging, Temporal Muscle/pathology
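To make the growth-chart idea in the abstract above concrete, here is a toy sketch of how age-binned percentile curves could be derived from a cohort of thickness measurements. The synthetic data, 2-year bins, and raw empirical percentiles are assumptions for illustration only; the published pipeline would use the real measurements and, most likely, smoothed centile modeling, and would produce separate charts per sex.

```python
import numpy as np

# Synthetic temporalis muscle thickness measurements (mm) with ages 4-35;
# stand-ins for the cohort described in the abstract, not real data.
rng = np.random.default_rng(0)
ages = rng.uniform(4, 35, size=2000)
tmt = 5.0 + 0.15 * ages - 0.002 * ages**2 + rng.normal(0, 0.8, size=2000)

# Bin by age and compute empirical percentiles per bin (one chart per sex in practice).
bins = np.arange(4, 37, 2)
for lo, hi in zip(bins[:-1], bins[1:]):
    in_bin = tmt[(ages >= lo) & (ages < hi)]
    p5, p50, p95 = np.percentile(in_bin, [5, 50, 95])
    print(f"age {lo:2d}-{hi:2d}: P5={p5:.1f}  P50={p50:.1f}  P95={p95:.1f} mm")
```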
4.
Neurooncol Adv ; 5(1): vdad119, 2023.
Article in English | MEDLINE | ID: mdl-37841693

ABSTRACT

With medical software platforms moving to cloud environments with scalable storage and computing, the translation of predictive artificial intelligence (AI) models to aid in clinical decision-making and facilitate personalized medicine for cancer patients is becoming a reality. Medical imaging, namely radiologic and histologic images, has immense analytical potential in neuro-oncology, and models utilizing integrated radiomic and pathomic data may yield a synergistic effect and provide a new modality for precision medicine. At the same time, the ability to harness multi-modal data is met with challenges in aggregating data across medical departments and institutions, as well as significant complexity in modeling the phenotypic and genotypic heterogeneity of pediatric brain tumors. In this paper, we review recent pathomic and integrated pathomic, radiomic, and genomic studies with clinical applications. We discuss current challenges limiting translational research on pediatric brain tumors and outline technical and analytical solutions. Overall, we propose that to empower the potential residing in radio-pathomics, systemic changes in cross-discipline data management and end-to-end software platforms to handle multi-modal data sets are needed, in addition to embracing modern AI-powered approaches. These changes can improve the performance of predictive models and, ultimately, our ability to advance brain cancer treatments and patient outcomes.

5.
Neurooncol Adv ; 5(1): vdad027, 2023.
Article in English | MEDLINE | ID: mdl-37051331

ABSTRACT

Background: Brain tumors are the most common solid tumors and the leading cause of cancer-related death among all childhood cancers. Tumor segmentation is essential in surgical and treatment planning, and in response assessment and monitoring. However, manual segmentation is time-consuming and has high interoperator variability. We present a multi-institutional deep learning-based method for automated brain extraction and segmentation of pediatric brain tumors based on multi-parametric MRI scans.

Methods: Multi-parametric scans (T1w, T1w-CE, T2, and T2-FLAIR) of 244 pediatric patients (n = 215 internal and n = 29 external cohorts) with de novo brain tumors, including a variety of tumor subtypes, were preprocessed and manually segmented to identify the brain tissue and to delineate four tumor subregions, i.e., enhancing tumor (ET), non-enhancing tumor (NET), cystic components (CC), and peritumoral edema (ED). The internal cohort was split into training (n = 151), validation (n = 43), and withheld internal test (n = 21) subsets. DeepMedic, a three-dimensional convolutional neural network, was trained and the model parameters were tuned. Finally, the network was evaluated on the withheld internal and external test cohorts.

Results: Dice similarity scores (median ± SD) were 0.91 ± 0.10/0.88 ± 0.16 for the whole tumor, 0.73 ± 0.27/0.84 ± 0.29 for ET, 0.79 ± 0.19/0.74 ± 0.27 for the union of all non-enhancing components (i.e., NET, CC, ED), and 0.98 ± 0.02 for brain tissue in both internal/external test sets.

Conclusions: Our proposed automated brain extraction and tumor subregion segmentation models demonstrated accurate performance on segmentation of the brain tissue and whole tumor regions in pediatric brain tumors and can facilitate detection of abnormal regions for further clinical measurements.
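The Dice similarity score reported above is a standard overlap metric between a predicted and a manual binary mask. The following is a minimal sketch of how it is typically computed; the toy masks and shapes are illustrative, not the study's data or code.

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy 3D masks standing in for a predicted and a manual tumor-subregion segmentation.
pred = np.zeros((64, 64, 64), dtype=bool)
gt = np.zeros((64, 64, 64), dtype=bool)
pred[20:40, 20:40, 20:40] = True
gt[22:42, 22:42, 22:42] = True
print(f"Dice = {dice_score(pred, gt):.3f}")
```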

6.
medRxiv ; 2023 Jan 11.
Article in English | MEDLINE | ID: mdl-36711966

ABSTRACT

Background: Brain tumors are the most common solid tumors and the leading cause of cancer-related death among all childhood cancers. Tumor segmentation is essential in surgical and treatment planning, and in response assessment and monitoring. However, manual segmentation is time-consuming and has high interoperator variability. We present a multi-institutional deep learning-based method for automated brain extraction and segmentation of pediatric brain tumors based on multi-parametric MRI scans.

Methods: Multi-parametric scans (T1w, T1w-CE, T2, and T2-FLAIR) of 244 pediatric patients (n=215 internal and n=29 external cohorts) with de novo brain tumors, including a variety of tumor subtypes, were preprocessed and manually segmented to identify the brain tissue and to delineate four tumor subregions, i.e., enhancing tumor (ET), non-enhancing tumor (NET), cystic components (CC), and peritumoral edema (ED). The internal cohort was split into training (n=151), validation (n=43), and withheld internal test (n=21) subsets. DeepMedic, a three-dimensional convolutional neural network, was trained and the model parameters were tuned. Finally, the network was evaluated on the withheld internal and external test cohorts.

Results: Dice similarity scores (median±SD) were 0.91±0.10/0.88±0.16 for the whole tumor, 0.73±0.27/0.84±0.29 for ET, 0.79±0.19/0.74±0.27 for the union of all non-enhancing components (i.e., NET, CC, ED), and 0.98±0.02 for brain tissue in both internal/external test sets.

Conclusions: Our proposed automated brain extraction and tumor subregion segmentation models demonstrated accurate performance on segmentation of the brain tissue and whole tumor regions in pediatric brain tumors and can facilitate detection of abnormal regions for further clinical measurements.

Key Points: We propose automated tumor segmentation and brain extraction on pediatric MRI. The volumetric measurements using our models agree with ground truth segmentations.

Importance of the Study: Response assessment in pediatric brain tumors (PBTs) is currently based on bidirectional or 2D measurements, which underestimate the size of non-spherical and complex PBTs in children compared to volumetric or 3D methods. There is a need to develop automated methods that reduce the manual burden and the intra- and inter-rater variability of segmenting tumor subregions and assessing volumetric changes. Most currently available automated segmentation tools were developed on adult brain tumors and therefore do not generalize well to PBTs, which have different radiological appearances. To address this, we propose a deep learning (DL) auto-segmentation method that shows promising results in PBTs, collected from a publicly available large-scale imaging dataset (Children's Brain Tumor Network; CBTN) that comprises multi-parametric MRI scans of multiple PBT types acquired across multiple institutions on different scanners and protocols. Complementary to tumor segmentation, we propose an automated DL model for brain tissue extraction.
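The preprint above contrasts 2D bidirectional measurements with volumetric assessment. As a small illustration of the volumetric side, the sketch below derives a tumor volume from a binary segmentation mask and the voxel spacing; the mask, spacing, and units are illustrative assumptions, not values from the study.

```python
import numpy as np

# Toy binary tumor mask and voxel spacing (mm), as would come from a segmentation
# model output and the MRI header; values are illustrative only.
mask = np.zeros((128, 128, 64), dtype=bool)
mask[40:80, 50:90, 20:40] = True
spacing_mm = (1.0, 1.0, 2.5)  # in-plane x, y spacing and slice thickness

voxel_volume_mm3 = float(np.prod(spacing_mm))
tumor_volume_ml = mask.sum() * voxel_volume_mm3 / 1000.0
print(f"Tumor volume: {tumor_volume_ml:.1f} mL")
```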

8.
Psychol Res ; 83(2): 216-226, 2019 Mar.
Article in English | MEDLINE | ID: mdl-29500490

ABSTRACT

Theories of embodied cognition propose that we recognize tools in part by reactivating sensorimotor representations of tool use in a process of simulation. If motor simulations play a causal role in tool recognition, then performing a concurrent motor task should differentially modulate recognition of experienced vs. non-experienced tools. We sought to test the hypothesis that an incompatible concurrent motor task modulates conceptual processing of learned vs. non-learned objects by directly manipulating the embodied experience of participants. We trained one group to use a set of novel, 3-D printed tools under the pretense that they were preparing for an archeological expedition to Mars (manipulation group); we trained a second group to report declarative information about how the tools are stored (storage group). With this design, familiarity and visual attention to different object parts were similar for both groups, though their qualitative interactions differed. After learning, participants made familiarity judgments of auditorily presented tool names while performing a concurrent motor task or simply sitting at rest. We showed that familiarity judgments were facilitated by motor state-dependence; specifically, in the manipulation group, familiarity was facilitated by a concurrent motor task, whereas in the storage group familiarity was facilitated while sitting at rest. These results are the first to directly show that manipulation experience differentially modulates conceptual processing of familiar vs. unfamiliar objects, suggesting that embodied representations contribute to recognizing tools.


Subject(s)
Concept Formation, Judgment, Psychomotor Performance, Recognition (Psychology), Attention, Cognition, Female, Humans, Learning, Male, Young Adult
9.
Proc Natl Acad Sci U S A ; 113(5): 1453-8, 2016 Feb 02.
Article in English | MEDLINE | ID: mdl-26712004

ABSTRACT

As raw sensory data are partial, our visual system extensively fills in missing details, creating enriched percepts based on incomplete bottom-up information. Despite evidence for internally generated representations at early stages of cortical processing, it is not known whether these representations include missing information of dynamically transforming objects. Long-range apparent motion (AM) provides a unique test case because objects in AM can undergo changes both in position and in features. Using fMRI and encoding methods, we found that the "intermediate" orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM, is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path. This neural representation is absent when AM inducers are presented simultaneously and when AM is visually imagined. Our results demonstrate dynamic filling-in in V1 for object features that are interpolated during kinetic transformations.


Subject(s)
Visual Cortex/physiology, Humans, Magnetic Resonance Imaging